Some features that I needed when working on deformableDETR #320

Merged 5 commits into dev on Feb 9, 2023

@Dee61298 (Contributor) commented Feb 2, 2023

  • New feature 1 : Added a `sequential` arg to the Data2Detr datamodule.
    When set, the train_dataloader uses a sequential sampler instead of a random sampler.
    If you run the command below, you should first see some lunchboxes with broccoli inside, and then a flower bouquet.

```shell
python data_modules/coco_detection2detr.py --sequential
```
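Under the hood, a flag like this presumably just selects which torch sampler the datamodule passes to its DataLoader. A minimal sketch of that idea in plain PyTorch (the `make_loader` helper is hypothetical, not the actual Data2Detr API):

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset


def make_loader(dataset, sequential=False, batch_size=2):
    # Hypothetical helper: swap the sampler based on the flag.
    # SequentialSampler yields indices 0, 1, 2, ... deterministically;
    # RandomSampler yields a random permutation each epoch.
    sampler = SequentialSampler(dataset) if sequential else RandomSampler(dataset)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)


data = TensorDataset(torch.arange(10))
loader = make_loader(data, sequential=True)
first_batch = next(iter(loader))[0]
print(first_batch.tolist())  # → [0, 1]: samples come back in dataset order
```

With `sequential=False` the same epoch would visit all 10 items in a shuffled order, which is why the PR's example command shows images in their on-disk order only when the flag is passed.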
  • New feature 2 : The layers are instantiated in order in DETR and DeformableDETR.
    Previously, running `print(model)` on either model displayed the layers out of order (e.g. the transformer before the backbone), which was confusing. Now they are displayed in order.
```python
from argparse import ArgumentParser

import alonet
from alonet.detr import LitDetr


def get_arg_parser():
    parser = ArgumentParser(conflict_handler="resolve")
    parser = alonet.common.add_argparse_args(parser)  # Common alonet parser
    parser = LitDetr.add_argparse_args(parser)  # LitDetr training parser
    # parser = pl.Trainer.add_argparse_args(parser)  # Pytorch Lightning parser
    return parser


def main():
    """Build the parser, init the Detr model, and print its layers."""
    args = get_arg_parser().parse_args()

    # Init the Detr model with the parsed arguments
    detr = LitDetr(args)
    print(detr.model)


if __name__ == "__main__":
    main()
```
  • New feature 3 : Added `spatial_shift` to Mask.
    The method body is empty for now, because an early implementation triggered unforeseen bugs in DeformableDETR. The stub is still necessary: it replaces a generic error with a clear one when SpatialShift is used for data augmentation.
    Without the modification, the example below fails with `Exception(f"This Augmented tensor {type(self)} should implement this method")`. With the modification, it runs without errors.
```python
import torch

import aloscene
import alodataset.transforms as T

# Build a frame with an attached mask
x = torch.zeros(3, 256, 256)
frame = aloscene.Frame(x, names=("C", "H", "W"))
mask = aloscene.Mask(torch.randint(0, 2, (1, 256, 256)), names=("C", "H", "W"))
frame.append_mask(mask)

# Apply the spatial shift augmentation to the frame and its mask
new_frame = T.SpatialShift((10, 10))(frame)
```
  • New feature 4 : Added a `hidden_dim` argument to build_decoder.
    I wanted to reduce the hidden dimension of the transformer to make a lightweight model, but this dimension was hardcoded in the decoder. I added the argument so it can be changed.
```python
from alonet.deformable_detr.deformable_detr import DeformableDETR
from alonet.deformable_detr.backbone import Joiner


class DeformableDetrCustom(DeformableDETR):
    """Deformable DETR with a custom transformer. For testing purposes only."""

    def __init__(self, *args, hidden_dim=256, return_intermediate_dec: bool = True, num_classes=91, **kwargs):
        backbone = self.build_backbone(
            backbone_name="resnet50", train_backbone=True, return_interm_layers=True, dilation=False
        )
        # The positional encoding dim must match the transformer's hidden_dim
        position_embed = self.build_positional_encoding(hidden_dim=hidden_dim)
        backbone = Joiner(backbone, position_embed)
        transformer = self.build_transformer(
            hidden_dim=hidden_dim,
            dropout=0.1,
            nheads=8,
            dim_feedforward=1024,
            enc_layers=6,
            dec_layers=6,
            num_feature_levels=4,
            dec_n_points=4,
            enc_n_points=4,
            return_intermediate_dec=return_intermediate_dec,
        )

        super().__init__(backbone, transformer, *args, num_classes=num_classes, with_box_refine=False, **kwargs)


if __name__ == "__main__":
    model = DeformableDetrCustom(weights=None)
    print(model.transformer.decoder.layers[0].cross_attn.sampling_offsets)  # in_features should be 256

    model2 = DeformableDetrCustom(hidden_dim=128, weights=None)
    print(model2.transformer.decoder.layers[0].cross_attn.sampling_offsets)  # in_features should be 128
```

This pull request includes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

@Dee61298 Dee61298 marked this pull request as ready for review February 3, 2023 08:58
@Dee61298 Dee61298 requested a review from Data-Iab February 8, 2023 10:08
Files with resolved review comments:
  • alonet/deformable_detr/deformable_detr.py
  • alonet/detr/data_modules/coco_detection2detr.py
  • alonet/detr/detr.py
@Data-Iab Data-Iab merged commit c825fc5 into dev Feb 9, 2023